
    Hard isogeny problems over RSA moduli and groups with infeasible inversion

    We initiate the study of computational problems on elliptic curve isogeny graphs defined over RSA moduli. We conjecture that several variants of the neighbor-search problem over these graphs are hard, and provide a comprehensive list of cryptanalytic attempts on these problems. Moreover, based on the hardness of these problems, we provide a construction of groups with infeasible inversion, where the underlying groups are the ideal class groups of imaginary quadratic orders. Recall that in a group with infeasible inversion, computing the inverse of a group element is required to be hard, while performing the group operation is easy. Motivated by the potential cryptographic application of building a directed transitive signature scheme, the search for a group with infeasible inversion was initiated in the theses of Hohenberger and Molnar (2003). Later it was also shown to provide a broadcast encryption scheme by Irrer et al. (2004). However, to date the only known instance of a group with infeasible inversion is implied by the much stronger primitive of a self-bilinear map, constructed by Yamakawa et al. (2014) from the hardness of factoring and indistinguishability obfuscation (iO). Our construction gives a candidate without using iO. Comment: Significant revision of the article previously titled "A Candidate Group with Infeasible Inversion" (arXiv:1810.00022v1). Cleared up the constructions by giving toy examples, added "The Parallelogram Attack" (Sec 5.3.2). 54 pages, 8 figures.
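
    To make the target primitive concrete, the following is a rough sketch (in our own notation, not taken from the paper) of why infeasible inversion is exactly what a directed transitive signature needs; the paper's actual constructions over ideal class groups are more involved.

        % Group with infeasible inversion (informal): a group (G, \cdot) with efficient
        % sampling and an efficient group law, such that computing g^{-1} from g is
        % computationally infeasible for outsiders.
        %
        % Directed-transitive-signature motivation (sketch): the signer assigns to each
        % node i an element x_i and issues, for an edge i -> j,
        \[
          \sigma_{i \to j} := x_i^{-1} \, x_j ,
        \]
        % which anyone can compose along a path using only the group law,
        \[
          \sigma_{i \to j} \cdot \sigma_{j \to k} = x_i^{-1} x_k = \sigma_{i \to k} ,
        \]
        % while forging the reverse edge amounts to computing
        % \sigma_{j \to i} = \sigma_{i \to j}^{-1}, i.e., inverting a group element.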

    Hiding secrets in public random functions

    Constructing advanced cryptographic applications often requires the ability to privately embed messages or functions in the code of a program. As an example, consider the task of building a searchable encryption scheme, which allows users to search over encrypted data and learn nothing beyond the search result. Such a task is achievable if it is possible to embed the secret key of an encryption scheme into the code of a program that performs the "decrypt-then-search" functionality, and to guarantee that the code hides everything except its functionality. This thesis studies two cryptographic primitives that facilitate hiding secrets in the code of random functions.

    1. We first study the notion of a private constrained pseudorandom function (PCPRF). A PCPRF allows the holder of the PRF master secret key to derive a public constrained key that changes the functionality of the original key without revealing the constraint description. This notion closely captures the goal of privately embedding functions in the code of a random function. Our main contribution is constructing single-key secure PCPRFs for NC^1 circuit constraints based on the learning with errors assumption. Single-key secure PCPRFs were known to support a wide range of cryptographic applications, such as private-key deniable encryption and watermarking. In addition, we build reusable garbled circuits from PCPRFs.

    2. We then study how to construct cryptographic hash functions that satisfy strong random-oracle-like properties. In particular, we focus on the notion of correlation intractability, which requires that, given the description of a function, it should be hard to find an input-output pair satisfying any sparse relation. Correlation intractability captures the security properties required for, e.g., the soundness of the Fiat-Shamir heuristic, where the Fiat-Shamir transformation is a practical method of building signature schemes from interactive proof protocols. However, correlation intractability was shown to be impossible to achieve for certain length parameters, and was widely considered unobtainable. Our contribution is building correlation intractable functions from various cryptographic assumptions. The security analyses of the constructions use the technique of secretly embedding constraints in the code of random functions.
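
    As a concrete reference point for the Fiat-Shamir heuristic mentioned above, here is a minimal Python sketch of the transformation applied to Schnorr identification, turning it into a signature scheme. The group parameters are deliberately tiny toy values for illustration only, and none of this code comes from the thesis.

        import hashlib
        import secrets

        # Toy prime-order subgroup: g = 4 generates the order-q subgroup of Z_p^*.
        # These parameters are far too small to be secure; they only show the syntax.
        p, q, g = 23, 11, 4

        def H(*parts):
            # Fiat-Shamir: a hash of the transcript replaces the verifier's challenge.
            h = hashlib.sha256()
            for part in parts:
                h.update(str(part).encode())
            return int.from_bytes(h.digest(), "big") % q

        def keygen():
            x = secrets.randbelow(q)                 # secret key
            return x, pow(g, x, p)                   # (sk, pk)

        def sign(x, msg):
            k = secrets.randbelow(q)
            a = pow(g, k, p)                         # prover's first message
            e = H(a, msg)                            # hashed challenge
            z = (k + e * x) % q                      # prover's response
            return e, z

        def verify(X, msg, sig):
            e, z = sig
            a = (pow(g, z, p) * pow(X, -e, p)) % p   # recompute the commitment
            return H(a, msg) == e

        sk, pk = keygen()
        assert verify(pk, "hello", sign(sk, "hello"))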

    Practically Solving LPN in High Noise Regimes Faster Using Neural Networks

    We conduct a systematic study of solving the learning parity with noise (LPN) problem using neural networks. Our main contribution is designing families of two-layer neural networks that practically outperform classical algorithms in high-noise, low-dimension regimes. We consider three settings in which the number of LPN samples is abundant, very limited, or in between. In each setting we provide neural network models that solve LPN as fast as possible. For some settings we are also able to provide theory that explains the rationale behind the design of our models. Compared with the previous experiments of Esser, Kübler, and May (CRYPTO 2017), for dimension n = 26 and noise rate τ = 0.498, the "Guess-then-Gaussian-elimination" algorithm takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66 minutes on 8 GPUs. Our algorithm can also be plugged into the hybrid algorithms for solving middle- or large-dimension LPN instances. Comment: 37 pages.
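
    For orientation, below is a minimal sketch of generating LPN samples and fitting a two-layer network to the noisy labels. The dimensions, noise rate, architecture, and training schedule are illustrative placeholders, not the paper's models.

        import torch
        import torch.nn as nn

        n, m, tau = 26, 200_000, 0.4                          # dimension, #samples, noise rate
        secret = torch.randint(0, 2, (n,))
        A = torch.randint(0, 2, (m, n))                       # public LPN samples a_i
        noise = (torch.rand(m) < tau).long()                  # Bernoulli(tau) noise bits
        b = (((A * secret).sum(dim=1) % 2) ^ noise).float()   # b_i = <a_i, s> + e_i mod 2

        # Two-layer network predicting the label bit from the sample.
        model = nn.Sequential(nn.Linear(n, 512), nn.ReLU(), nn.Linear(512, 1))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()

        for step in range(2000):
            idx = torch.randint(0, m, (1024,))
            opt.zero_grad()
            loss = loss_fn(model(A[idx].float()).squeeze(1), b[idx])
            loss.backward()
            opt.step()

    A predictor of this kind can then be combined with classical post-processing (for instance, Gaussian elimination on the samples it is most confident about), in the spirit of the hybrid algorithms the abstract mentions.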

    Centers of discontinuous piecewise smooth quasi-homogeneous polynomial differential systems

    In this paper we investigate the center problem for discontinuous piecewise smooth quasi-homogeneous, but non-homogeneous, polynomial differential systems. First, we provide necessary and sufficient conditions for the existence of a center in such systems. Moreover, these centers are global, and the period function of their periodic orbits is monotonic. Second, we characterize the centers of the discontinuous piecewise smooth quasi-homogeneous cubic and quartic polynomial differential systems.
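
    For readers outside the area, the standard definition used in this line of work is sketched below; conventions for the weights and the degree vary slightly between papers.

        % A polynomial differential system
        %   \dot{x} = P(x, y), \qquad \dot{y} = Q(x, y)
        % is quasi-homogeneous of weights (s_1, s_2) and degree d if, for all \alpha > 0,
        \[
          P(\alpha^{s_1} x, \alpha^{s_2} y) = \alpha^{s_1 - 1 + d} P(x, y), \qquad
          Q(\alpha^{s_1} x, \alpha^{s_2} y) = \alpha^{s_2 - 1 + d} Q(x, y) .
        \]
        % It is homogeneous when s_1 = s_2 = 1; the discontinuous piecewise smooth
        % setting glues two such systems along a switching set.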

    An Alternative View of the Graph-Induced Multilinear Maps

    In this paper, we view multilinear maps through the lens of "homomorphic obfuscation". Specifically, we show how to homomorphically obfuscate the kernel-test and affine-subspace-test functionalities of high-dimensional matrices. Namely, the evaluator is able to perform additions and multiplications over the obfuscated matrices, and to test subspace membership on the resulting code. The homomorphic operations are constrained by the prescribed data structure, e.g., a tree or a graph, in which the matrices are stored. The security properties of all the constructions are based on the hardness of the Learning with Errors (LWE) problem. The technical heart is to "control" the "chain reactions" over a sequence of LWE instances. Viewed from a different angle, the homomorphic obfuscation scheme coincides with the graph-induced multilinear maps proposed by Gentry, Gorbunov and Halevi (GGH15). Our proof technique recognizes several "safe modes" of GGH15 that were not known before, including a simple special case: if the graph is acyclic and the matrices are sampled independently from binary or error distributions, then the encodings of the matrices are pseudorandom.
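
    Schematically, the GGH15-style encoding relation behind the "chain reactions" reads as follows (notation simplified; see the paper for the exact distributions and parameters).

        % Each node u of the graph carries a public matrix A_u over Z_q. A secret
        % matrix S is encoded on an edge u -> v as a small-norm matrix D_{u \to v} with
        \[
          A_u \, D_{u \to v} = S \, A_v + E \pmod{q}, \qquad E \text{ small}.
        \]
        % Multiplying encodings along a path telescopes, which is what enables the
        % homomorphic operations and the subspace/kernel tests at the end of the path:
        \[
          A_u \, D_{u \to v} \, D_{v \to w} = S_1 S_2 \, A_w + E' \pmod{q}.
        \]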

    QSETH strikes again: finer quantum lower bounds for lattice problem, strong simulation, hitting set problem, and more

    While seemingly undesirable, it is not a surprising fact that there are certain problems for which quantum computers offer no computational advantage over their classical counterparts. Moreover, there are problems for which no 'useful' computational advantage is possible with current quantum hardware. This situation can, however, be beneficial if we don't want quantum computers to solve certain problems fast - say, problems relevant to post-quantum cryptography. In such a situation, we would like to have evidence that it is difficult to solve those problems on quantum computers; but what is their exact complexity? To answer this, one has to prove lower bounds, but proving unconditional time lower bounds has never been easy. As a result, resorting to conditional lower bounds has been quite popular in the classical community and is gaining momentum in the quantum community. In this paper, by using the QSETH framework [Buhrman-Patro-Speelman 2021], we are able to understand the quantum complexity of a few natural variants of CNFSAT, such as parity-CNFSAT or counting-CNFSAT, and are also able to comment on the non-trivial complexity of approximate-#CNFSAT; both of these have interesting implications for the complexity of (variations of) lattice problems, strong simulation, the hitting set problem, and more. In the process, we explore the QSETH framework in greater detail than was (required and) discussed in the original paper, thus also serving as a useful guide on how to effectively use the QSETH framework. Comment: 34 pages, 2 tables, 2 figures.

    Constraint-hiding Constrained PRFs for NC1 from LWE

    Constraint-hiding constrained PRFs (CHCPRFs), initially studied by Boneh, Lewi, and Wu [PKC 2017], are constrained PRFs whose constrained keys hide the description of the constraint. Although they are envisioned to have powerful applications such as searchable encryption, private-detectable watermarking, and symmetric deniable encryption, the only known candidates for CHCPRFs are based on indistinguishability obfuscation or on multilinear maps with strong security properties. In this paper, we construct CHCPRFs for all NC1 circuits from the Learning with Errors assumption. The construction draws heavily on the graph-induced multilinear maps of Gentry, Gorbunov, and Halevi [TCC 2015], as well as on existing lattice-based PRFs. Our construction gives an instance of the GGH15 applications with a security reduction from LWE. We also show how to build from CHCPRFs reusable garbled circuits (RGC), or equivalently private-key function-hiding functional encryption with 1-key security. This provides an approach to constructing RGC different from that of Goldwasser et al. [STOC 2013].
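
    A minimal sketch of the CHCPRF syntax described above is given below; the names and types are ours, and only the interface is shown, not the LWE-based construction.

        from abc import ABC, abstractmethod
        from typing import Callable

        class CHCPRF(ABC):
            """Constraint-hiding constrained PRF, syntax only.

            Correctness: on inputs accepted by the constraint (conventions differ),
            the constrained key evaluates to the same value as the master key.
            Constraint hiding: the constrained key computationally hides the constraint.
            """

            @abstractmethod
            def keygen(self) -> bytes:
                """Sample a master secret key."""

            @abstractmethod
            def eval(self, msk: bytes, x: bytes) -> bytes:
                """Evaluate the PRF under the master secret key."""

            @abstractmethod
            def constrain(self, msk: bytes, constraint: Callable[[bytes], bool]) -> bytes:
                """Derive a constrained key; in the paper the constraint is an NC1 circuit."""

            @abstractmethod
            def constrained_eval(self, ck: bytes, x: bytes) -> bytes:
                """Evaluate using only the constrained key."""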

    HTC-DC Net: Monocular Height Estimation from Single Remote Sensing Images

    3D geo-information is of great significance for understanding the living environment; however, 3D perception from remote sensing data, especially on a large scale, remains restricted. To tackle this problem, we propose a method for monocular height estimation from optical imagery, which is currently one of the richest sources of remote sensing data. As an ill-posed problem, monocular height estimation requires well-designed networks with enhanced representations to improve performance. Moreover, the distribution of height values is long-tailed, with low-height pixels (e.g., the background) as the head, so trained networks are usually biased and tend to underestimate building heights. To solve these problems, instead of formulating the task as pure regression, we propose HTC-DC Net, which follows the classification-regression paradigm, with the head-tail cut (HTC) and the distribution-based constraints (DCs) as the main contributions. HTC-DC Net is composed of a backbone network as the feature extractor, the HTC-AdaBins module, and a hybrid regression process. The HTC-AdaBins module serves as the classification phase, determining bins adaptive to each input image. It is equipped with a vision transformer encoder to incorporate local context with holistic information, and involves an HTC to address the long-tailed distribution in monocular height estimation, balancing the performance on foreground and background pixels. The hybrid regression process performs regression via smoothing of the bins from the classification phase and is trained via the DCs. The proposed network is tested on three datasets of different resolutions, namely ISPRS Vaihingen (0.09 m), DFC19 (1.3 m) and GBH (3 m). Experimental results show the superiority of the proposed network over existing methods by large margins. Extensive ablation studies demonstrate the effectiveness of each design component. Comment: 18 pages, 10 figures, submitted to IEEE Transactions on Geoscience and Remote Sensing.
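
    To illustrate the bin-based classification-regression paradigm described above, here is a schematic PyTorch head in the spirit of AdaBins. The layer shapes, number of bins, and height range are placeholders, and the actual HTC-AdaBins module (with its transformer encoder, head-tail cut, and distribution-based constraints) is more involved.

        import torch
        import torch.nn as nn

        class BinRegressionHead(nn.Module):
            def __init__(self, in_ch=256, n_bins=64, max_height=100.0):
                super().__init__()
                self.max_height = max_height
                # classification phase: predict bin widths adaptive to each image
                self.bin_widths = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                                nn.Linear(in_ch, n_bins))
                # per-pixel probabilities over the bins
                self.logits = nn.Conv2d(in_ch, n_bins, kernel_size=1)

            def forward(self, feats):                                  # feats: (B, C, H, W)
                widths = torch.softmax(self.bin_widths(feats), dim=1)  # (B, K), sums to 1
                edges = torch.cumsum(widths, dim=1) * self.max_height
                centers = edges - 0.5 * widths * self.max_height       # (B, K) bin centers
                probs = torch.softmax(self.logits(feats), dim=1)       # (B, K, H, W)
                # hybrid regression: height = probability-weighted sum of bin centers
                return (probs * centers[:, :, None, None]).sum(dim=1)  # (B, H, W)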